Linearizability with Ownership Transfer
Linearizability is a commonly accepted notion of correctness for libraries of
concurrent algorithms. Unfortunately, it assumes complete isolation between a
library and its client, with interactions limited to passing values of a given
data type. This is inappropriate for common programming languages, where
libraries and their clients can communicate via the heap, transferring the
ownership of data structures, and can even run in a shared address space
without any memory protection. In this paper, we present the first definition
of linearizability that lifts this limitation and establish an Abstraction
Theorem: while proving a property of a client of a concurrent library, we can
soundly replace the library by its abstract implementation related to the
original one by our generalisation of linearizability. This allows abstracting
from the details of the library implementation while reasoning about the
client. We also prove that linearizability with ownership transfer can be
derived from the classical one if the library does not access some of the data
structures transferred to it by the client.
Effective lock handling in stateless model checking.
Stateless Model Checking (SMC) is a verification technique for concurrent programs that checks for safety violations by exploring all possible thread interleavings. SMC is usually coupled with Partial Order Reduction (POR), which exploits the independence of instructions to avoid redundant explorations when an equivalent one has already been considered. While effective POR techniques have been developed for many different memory models, they are only able to exploit independence at the instruction level, which makes them unsuitable for programs with coarse-grained synchronization mechanisms such as locks. We present a lock-aware POR algorithm, LAPOR, that exploits independence at both instruction and critical section levels. This enables LAPOR to explore exponentially fewer interleavings than the state-of-the-art techniques for programs that use locks conservatively. Our algorithm is sound, complete, and optimal, and can be used for verifying programs under several different memory models. We implement LAPOR in a tool and show that it can be exponentially faster than the state-of-the-art model checkers.
Weak persistency semantics from the ground up: formalising the persistency semantics of ARMv8 and transactional models
Emerging non-volatile memory (NVM) technologies promise the durability of disks with the performance of volatile memory (RAM). To describe the persistency guarantees of NVM, several memory persistency models have been proposed in the literature. However, the formal persistency semantics of mainstream hardware is unexplored to date. To close this gap, we present a formal declarative framework for describing concurrency models in the NVM context, and then develop the PARMv8 persistency model as an instance of our framework, formalising the persistency semantics of the ARMv8 architecture for the first time. To facilitate correct persistent programming, we study transactions as a simple abstraction for concurrency and persistency control. We thus develop the PSER (persistent serialisability) persistency model, formalising transactional semantics in the NVM context for the first time, and demonstrate that PSER correctly compiles to PARMv8. This then enables programmers to write correct, concurrent and persistent programs, without having to understand the low-level architecture-specific persistency semantics of the underlying hardware.
PerSeVerE: persistency semantics for verification under ext4.
Although ubiquitous, modern filesystems have rather complex behaviours that are hardly understood by programmers and lead to severe software bugs such as data corruption. As a first step to ensure correctness of software performing file I/O, we formalize the semantics of the Linux ext4 filesystem, which we integrate with the weak memory consistency semantics of C/C++. We further develop an effective model checking approach for verifying programs that use the filesystem. In doing so, we discover and report bugs in commonly-used text editors such as vim, emacs and nano.
Faster linearizability checking via P-compositionality
Linearizability is a well-established consistency and correctness criterion
for concurrent data types. An important feature of linearizability is Herlihy
and Wing's locality principle, which says that a concurrent system is
linearizable if and only if all of its constituent parts (so-called objects)
are linearizable. This paper presents P-compositionality, which generalizes
the idea behind the locality principle to operations on the same concurrent
data type. We implement P-compositionality in a novel linearizability
checker. Our experiments with over nine implementations of concurrent sets,
including Intel's TBB library, show that our linearizability checker is one
order of magnitude faster and/or more space efficient than the state-of-the-art
algorithm.
Persistency semantics of the Intel-x86 architecture
Emerging non-volatile memory (NVM) technologies promise the durability of disks with the performance of RAM. To describe the persistency guarantees of NVM, several memory persistency models have been proposed in the literature. However, the persistency semantics of the ubiquitous x86 architecture remains unexplored to date. To close this gap, we develop the Px86 (‘persistent x86’) model, formalising the persistency semantics of Intel-x86 for the first time. We formulate Px86 both operationally and declaratively, and prove that the two characterisations are equivalent. To demonstrate the application of Px86, we develop two persistent libraries over Px86: a persistent transactional library, and a persistent variant of the Michael–Scott queue. Finally, we encode our declarative Px86 model in Alloy and use it to generate persistency litmus tests automatically.
An implementation of Deflate in Coq
The widely-used compression format "Deflate" is defined in RFC 1951 and is
based on prefix-free codings and backreferences. There are unclear points about
the way these codings are specified, and several sources for confusion in the
standard. We address these issues by giving a rigorous mathematical
specification, which we formalized in Coq. We produced a verified
implementation in Coq which achieves competitive performance on inputs of
several megabytes. In this paper we present the several parts of our
implementation: a fully verified implementation of canonical prefix-free
codings, which can be used in other compression formats as well, and an elegant
formalism for specifying sophisticated formats, which we used to implement both
a compression and decompression algorithm in Coq which we formally prove
inverse to each other -- the first time this has been achieved to our
knowledge. Compatibility with other Deflate implementations is shown
empirically. We furthermore discuss some of the difficulties, specifically
regarding memory and runtime requirements, and our approaches to overcoming them.